A risk measure is used to determine the amount of an asset or set of assets (traditionally currency) to be kept in reserve. The purpose of this reserve is to make the risks taken by financial institutions, such as banks and insurance companies, acceptable to the regulator. In recent years attention has turned towards convex and coherent risk measurement.
A risk measure is defined as a mapping from a set of random variables to the real numbers. This set of random variables represents the risk at hand. The common notation for a risk measure associated with a random variable $X$ is $\rho(X)$. A risk measure $\rho: \mathcal{L} \to \mathbb{R} \cup \{+\infty\}$ should have certain properties:
Normalized: $\rho(0) = 0$
Translative: if $a \in \mathbb{R}$ and $Z \in \mathcal{L}$, then $\rho(Z + a) = \rho(Z) - a$
Monotone: if $Z_1, Z_2 \in \mathcal{L}$ and $Z_1 \leq Z_2$, then $\rho(Z_2) \leq \rho(Z_1)$
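A minimal Python sketch (not from the source) of one concrete risk measure, empirical value at risk, illustrating the mapping from a random variable (here, a sample of outcomes) to a real number and the translation property; the function name, the level $\alpha = 0.95$, and the sample parameters are illustrative assumptions.

```python
import numpy as np

def value_at_risk(x, alpha=0.95):
    """Empirical value at risk at level alpha: the alpha-quantile of the
    loss -x, i.e. the smallest cash amount m with P(X + m < 0) <= 1 - alpha."""
    return np.quantile(-x, alpha)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.05, scale=0.2, size=100_000)  # sampled portfolio outcomes
a = 1.0                                            # deterministic cash amount

print(value_at_risk(x))        # rho(X)
print(value_at_risk(x + a))    # translation: rho(X + a) = rho(X) - a ...
print(value_at_risk(x) - a)    # ... matches the line above
```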
In a situation with $\mathbb{R}^d$-valued portfolios such that risk can be measured in $m \leq d$ of the assets, a set of portfolios is the proper way to depict risk. Set-valued risk measures are useful for markets with transaction costs.[1]
A set-valued risk measure is a function $R: L_d^p \rightarrow \mathbb{F}_M$, where $L_d^p$ is a $d$-dimensional $L^p$ space, $\mathbb{F}_M = \{D \subseteq M: D = \operatorname{cl}(D + K_M)\}$, and $K_M = K \cap M$, where $K$ is a constant solvency cone and $M$ is the set of portfolios of the $m$ reference assets. $R$ must have the following properties:[2]
Normalized: $K_M \subseteq R(0)$ and $R(0) \cap -\operatorname{int} K_M = \emptyset$
Translative in $M$: $\forall X \in L_d^p, \forall u \in M: R(X + u1) = R(X) - u$
Monotone: $X_2 - X_1 \in L_d^p(K) \Rightarrow R(X_2) \supseteq R(X_1)$
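A rough Python sketch of the idea, not from the source: in a two-asset market with proportional transaction costs, approximate the set of accepted deterministic deposits on a grid, using a crude worst-case (scenario-wise solvency) acceptability rule. The bid/ask rates and all names are hypothetical.

```python
import numpy as np

def in_solvency_cone(v, bid=0.9, ask=1.1):
    """Membership test for a two-asset solvency cone K: position
    v = (units of asset 1, units of asset 2) is solvent if trading it
    through the bid/ask spread can leave both components nonnegative."""
    x1, x2 = v
    rate = bid if x2 >= 0 else ask
    return x1 + rate * x2 >= 0

def set_valued_risk(scenarios, grid):
    """Grid approximation of a set-valued risk measure R(X): a deposit u
    in the reference assets is accepted when X + u is solvent in every
    scenario -- a worst-case rule used here purely for illustration."""
    return [u for u in grid
            if all(in_solvency_cone(x + u) for x in scenarios)]

rng = np.random.default_rng(1)
scenarios = rng.normal(0.0, 1.0, size=(500, 2))          # R^2-valued payoffs
grid = [np.array([a, b]) for a in np.linspace(0.0, 5.0, 11)
                         for b in np.linspace(0.0, 5.0, 11)]

accepted = set_valued_risk(scenarios, grid)
print(f"{len(accepted)} of {len(grid)} grid deposits are acceptable")
```

By construction the accepted set is an upper set with respect to the cone, so the normalization, translativity, and monotonicity properties above hold (up to the grid resolution).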
Variance (or standard deviation) is not a risk measure in the above sense. This can be seen since it has neither the translation property nor monotonicity: $\mathrm{Var}(X + a) = \mathrm{Var}(X) \neq \mathrm{Var}(X) - a$ for all $a \in \mathbb{R}$, and a simple counterexample for monotonicity can be found. The standard deviation is a deviation risk measure.
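A quick numerical check of both failures (a sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=100_000)
a = 1.0

# Translation fails: variance is unchanged by adding cash, whereas a
# risk measure would decrease by a.
print(np.var(x + a))   # ~ 1.0, equal to Var(X)
print(np.var(x) - a)   # ~ 0.0, what the translation property would give

# Monotonicity fails too: X1 = 0 <= X2 = |X|, yet Var(X2) > Var(X1) = 0,
# while monotonicity would require rho(X2) <= rho(X1).
print(np.var(np.abs(x)))   # > 0 although |X| dominates the constant 0
```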
There is a one-to-one correspondence between an acceptance set and a corresponding risk measure. As defined below, it can be shown that $R_{A_R}(X) = R(X)$ and $A_{R_A} = A$.
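A sketch of the round trip in the scalar case, reusing the empirical value-at-risk measure from above (all names illustrative): build the acceptance set $A_\rho = \{X : \rho(X) \leq 0\}$, recover $\rho_A(X) = \inf\{u \in \mathbb{R} : X + u \in A\}$, and check numerically that the two agree.

```python
import numpy as np

def rho(x, alpha=0.95):
    """A scalar risk measure (empirical value at risk, illustrative)."""
    return np.quantile(-x, alpha)

def in_acceptance_set(x):
    """Acceptance set induced by rho: A_rho = {X : rho(X) <= 0}."""
    return rho(x) <= 0

def rho_from_acceptance(x, grid):
    """Risk measure recovered from the acceptance set:
    rho_A(X) = inf{u in R : X + u in A}, approximated on a grid of u."""
    return min(u for u in grid if in_acceptance_set(x + u))

rng = np.random.default_rng(3)
x = rng.normal(0.0, 0.2, size=50_000)
grid = np.linspace(-2.0, 2.0, 4001)   # grid spacing 0.001

print(rho(x))                         # direct evaluation
print(rho_from_acceptance(x, grid))   # agrees up to the grid spacing
```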
There is a one-to-one relationship between a deviation risk measure $D$ and an expectation-bounded risk measure $\rho$, where for any $X \in L^2$:
$D(X) = \rho(X - E[X])$
$\rho(X) = D(X) - E[X]$
$\rho$ is called expectation bounded if it satisfies $\rho(X) > E[-X]$ for any nonconstant $X$ and $\rho(X) = E[-X]$ for any constant $X$.[3]
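A sketch of the correspondence, assuming empirical expected shortfall as the expectation-bounded $\rho$ (function names and parameters are illustrative): build $D(X) = \rho(X - E[X])$ and verify $\rho(X) = D(X) - E[X]$ on a sample.

```python
import numpy as np

def expected_shortfall(x, alpha=0.95):
    """An expectation-bounded risk measure (empirical expected shortfall):
    the mean loss beyond the alpha-quantile of the loss -x."""
    losses = -x
    return losses[losses >= np.quantile(losses, alpha)].mean()

def deviation(x, alpha=0.95):
    """Deviation measure induced by rho: D(X) = rho(X - E[X])."""
    return expected_shortfall(x - x.mean(), alpha)

rng = np.random.default_rng(4)
x = rng.normal(0.1, 0.3, size=100_000)

# The inverse direction of the correspondence: rho(X) = D(X) - E[X].
print(expected_shortfall(x))
print(deviation(x) - x.mean())
```

The two printed values coincide because $\rho$ is translative, so $\rho(X - E[X]) = \rho(X) + E[X]$.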